This paper proposes a graph-based approach to representing spatio-temporal trajectory data that enables effective visualization and characterization of city-wide traffic dynamics. With the advance of sensor, mobile, and Internet of Things (IoT) technologies, vehicle and passenger trajectories are being collected on a massive scale and are becoming a critical source of insight into traffic patterns and traveller behaviour. To leverage such trajectory data to better understand traffic dynamics in a large-scale urban network, this study develops a trajectory-based network traffic analysis method that converts individual trajectory data into a sequence of graphs that evolve over time (known as dynamic graphs or time-evolving graphs) and analyses network-wide traffic patterns in terms of a compact and informative graph representation of aggregated traffic flows. First, we partition the entire network into a set of cells based on the spatial distribution of data points in individual trajectories, where the cells represent spatial regions between which aggregated traffic flows can be measured. Next, the dynamic flows of moving objects are represented as a time-evolving graph, where regions are graph vertices and flows between them are weighted directed edges. Given a fixed set of vertices, edges are inserted or removed at every time step depending on the presence of traffic flows between two regions in a given time window. Once a dynamic graph is built, we apply graph mining algorithms to detect change-points in time: time points at which the graph exhibits significant changes in its overall structure and which therefore correspond to change-points in city-wide mobility patterns throughout the day (e.g., global transitions between peak and off-peak periods).
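The construction described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a uniform square grid stands in for the data-driven cell partitioning, and each time window's graph is just a dictionary from directed region pairs to flow counts.

```python
from collections import defaultdict

def cell_of(x, y, size=1.0):
    """Map a coordinate to a square grid cell (a simple stand-in for the
    data-driven partitioning described in the abstract)."""
    return (int(x // size), int(y // size))

def build_dynamic_graph(trajectories, window=60.0, cell_size=1.0):
    """Build a time-evolving graph: for each time window, a dict mapping
    directed region pairs (u, v) to the number of moving objects that
    crossed from region u to region v in that window."""
    graphs = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:  # traj: list of (t, x, y) points
        for (t0, x0, y0), (t1, x1, y1) in zip(traj, traj[1:]):
            u, v = cell_of(x0, y0, cell_size), cell_of(x1, y1, cell_size)
            if u != v:  # an inter-region movement creates/updates an edge
                graphs[int(t0 // window)][(u, v)] += 1
    return graphs

# Two toy trajectories making the same cell-to-cell movement in window 0.
trajs = [
    [(0, 0.5, 0.5), (10, 1.5, 0.5)],   # cell (0,0) -> (1,0)
    [(5, 0.5, 0.5), (20, 1.5, 0.5)],   # same movement
]
g = build_dynamic_graph(trajs, window=60.0, cell_size=1.0)
print(g[0][((0, 0), (1, 0))])  # -> 2
```

Change-point detection would then operate on the sequence `g[0], g[1], ...`, comparing consecutive graphs' structure.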
Many cyberattacks begin with the distribution of phishing URLs. When a victim clicks such a URL, their private information leaks to the attacker. Several machine learning methods have been proposed to detect phishing URLs. However, detecting evasive phishing URLs, i.e., phishing URLs that pretend to be benign by manipulating their patterns, remains largely unexplored. In many cases, attackers (i) reuse phishing web pages, because crafting entirely new kits incurs non-trivial cost; (ii) prefer hosting companies that do not require private information and are cheaper than others; (iii) favor shared hosting services for cost efficiency; and (iv) sometimes use benign domains, IP addresses, and URL string patterns to evade existing detection methods. Motivated by these behavioral characteristics, we propose a network-based inference method to accurately detect phishing URLs camouflaged with legitimate patterns, i.e., one that is robust to evasion. In our network-based approach, a phishing URL is still identified as phishing even after evasion, unless a majority of its neighbors in the network are evaded at the same time. Our method consistently shows better detection performance than state-of-the-art methods across various experimental tests, e.g., 0.89 for our method versus 0.84 for the best feature-based method.
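The "guilt by association" idea behind the network-based inference can be illustrated with a simple score-propagation sketch. This is a hypothetical simplification, not the paper's algorithm: URLs sharing hosting infrastructure are linked, and each node's phishing score blends its own feature-based seed score with its neighbors' scores, so an evasive URL stays suspicious unless most of its neighborhood is cleaned too.

```python
from collections import defaultdict

def propagate_scores(edges, seed_scores, rounds=10, alpha=0.5):
    """Simple score propagation: each node's phishing score is a blend of
    its own seed score (weight alpha) and the mean score of its neighbors
    (weight 1 - alpha), iterated for a fixed number of rounds."""
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    scores = dict(seed_scores)
    for _ in range(rounds):
        new = {}
        for n in scores:
            if nbrs[n]:
                mean_nbr = sum(scores[m] for m in nbrs[n]) / len(nbrs[n])
                new[n] = alpha * seed_scores[n] + (1 - alpha) * mean_nbr
            else:
                new[n] = seed_scores[n]
        scores = new
    return scores

# "evader" has a benign-looking feature score (0.1) but shares hosting
# infrastructure with two known phishing URLs; propagation raises it.
edges = [("evader", "phish1"), ("evader", "phish2")]
seeds = {"evader": 0.1, "phish1": 1.0, "phish2": 1.0}
out = propagate_scores(edges, seeds)
print(out["evader"] > seeds["evader"])  # -> True
```

The node names and the blending rule are illustrative; the point is only that a manipulated feature score cannot, by itself, detach a URL from its suspicious neighborhood.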
We present a novel semi-supervised learning framework that intelligently leverages consistency regularization between the model's predictions from two strongly augmented views of an image, weighted by the confidence of pseudo-labels, dubbed ConMatch. While the latest semi-supervised learning methods use weakly and strongly augmented views of an image to define a directional consistency loss, how to define such a direction for the consistency between two strong views remains unexplored. To address this, we propose novel confidence measures for pseudo-labels from strong views, using the weak view as an anchor, in both non-parametric and parametric approaches. In particular, in the parametric approach, we present, for the first time, a network that learns the confidence of a pseudo-label, trained end-to-end with the backbone model. In addition, we present a stage-wise training scheme to boost the convergence of training. When incorporated into existing semi-supervised learners, ConMatch consistently boosts their performance. We conduct experiments to demonstrate the effectiveness of our method over the latest methods and provide extensive ablation studies. Code has been made publicly available at https://github.com/jiwoncocoder/conmatch.
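The core loss term can be sketched without a deep learning framework. This is a hypothetical simplification of the non-parametric variant, not the released implementation: the weak view supplies a thresholded pseudo-label, the confidence of strong view 1 is taken as its probability mass on that anchor label, and that confidence weights a cross-entropy pushing strong view 2 toward strong view 1's prediction.

```python
import math

def confidence_weighted_consistency(p_weak, p_strong1, p_strong2, tau=0.9):
    """Confidence-weighted strong-to-strong consistency (schematic):
    - the weak view's argmax is the pseudo-label, kept only above tau;
    - strong view 1's probability on that label is its confidence;
    - the loss is that confidence times the cross-entropy of strong
      view 2 against strong view 1's argmax."""
    pseudo = max(range(len(p_weak)), key=p_weak.__getitem__)
    if p_weak[pseudo] < tau:            # standard confidence threshold
        return 0.0
    conf = p_strong1[pseudo]            # anchor-based confidence measure
    target = max(range(len(p_strong1)), key=p_strong1.__getitem__)
    return -conf * math.log(p_strong2[target] + 1e-12)

loss = confidence_weighted_consistency(
    [0.95, 0.03, 0.02],   # weak view: confident pseudo-label 0
    [0.80, 0.15, 0.05],   # strong view 1
    [0.60, 0.30, 0.10],   # strong view 2
)
print(round(loss, 4))  # -> 0.4087  (= -0.8 * ln(0.6))
```

In the parametric variant described above, `conf` would instead come from a small network trained end-to-end with the backbone.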
This paper addresses the problem of estimating link flows in a road network by combining limited traffic volume and vehicle trajectory data. While traffic volume data from loop detectors have been a common data source for link flow estimation, the detectors cover only a subset of links. Vehicle trajectory data collected from vehicle tracking sensors are also increasingly incorporated. However, trajectory data are often sparse: the observed trajectories represent only a small fraction of the whole population, the exact sampling rate is unknown, and it may vary over space and time. This study proposes a novel generative modeling framework in which we formulate the link-to-link movements of a vehicle as a sequential decision-making problem using the Markov Decision Process framework and train an agent to make sequential decisions that generate realistic synthetic vehicle trajectories. We use reinforcement learning (RL)-based methods to find the optimal behavior of the agent, based on which synthetic population vehicle trajectories can be generated to estimate link flows across the whole network. To ensure that the generated population vehicle trajectories are consistent with the observed traffic volume and trajectory data, two methods based on inverse reinforcement learning and constrained reinforcement learning are proposed. The proposed generative modeling framework, solved with either of these RL-based methods, is validated by solving the link flow estimation problem in a real road network. Additionally, we perform comprehensive experiments to compare its performance with two existing methods. The results show that the proposed framework has higher estimation accuracy and robustness under realistic scenarios where certain behavioral assumptions about drivers are not met or where the network coverage and penetration rate of trajectory data are low.
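The generation step can be illustrated with a toy sketch. This assumes a fixed stochastic policy rather than one learned by RL, and the network, link names, and policy format are invented for illustration: an agent repeatedly samples a next link (or exits), and link flows are the visit counts over the synthetic population.

```python
import random
from collections import Counter

def generate_trajectories(policy, origins, n=1000, max_steps=20, seed=0):
    """Generate synthetic link-to-link trajectories from a stochastic
    policy: policy[link] is a list of (next_link, prob) pairs, with None
    as the terminal 'exit' action. Link flows are the visit counts over
    the synthetic population."""
    rng = random.Random(seed)
    flows = Counter()
    for _ in range(n):
        link = rng.choice(origins)
        for _ in range(max_steps):
            flows[link] += 1
            nxt, probs = zip(*policy[link])
            link = rng.choices(nxt, weights=probs)[0]
            if link is None:
                break
    return flows

# Toy network: from link "a", vehicles continue to "b" (80%) or "c" (20%),
# then exit. Flow conservation: flow(a) == flow(b) + flow(c).
policy = {
    "a": [("b", 0.8), ("c", 0.2)],
    "b": [(None, 1.0)],
    "c": [(None, 1.0)],
}
flows = generate_trajectories(policy, origins=["a"], n=1000)
print(flows["a"], flows["b"] + flows["c"])  # -> 1000 1000
```

In the paper's framework, the policy itself is what the inverse-RL or constrained-RL step fits, so that the resulting flows match the observed detector volumes.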
Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires a large memory and, even worse, is often infeasible in real-world applications where access to past data is limited. Inspired by the generative nature of the hippocampus as a short-term memory system in the primate brain, we propose Deep Generative Replay, a novel framework with a cooperative dual-model architecture consisting of a deep generative model ("generator") and a task-solving model ("solver"). With only these two models, training data for previous tasks can easily be sampled and interleaved with those for a new task. We test our methods in several sequential learning settings involving image classification tasks.
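The batch-assembly idea can be sketched schematically. The toy generator and solver below are stand-ins invented for illustration, not the paper's networks: part of each batch comes from the current task, and the rest is sampled from the old generator and labeled by the old solver, so past tasks are rehearsed without storing their data.

```python
import random

def replay_batch(new_data, generator, solver, replay_ratio=0.5, size=8, seed=0):
    """Generative-replay-style batch assembly (schematic): mix real
    current-task samples with generated old-task samples whose labels
    are produced by the previous solver."""
    rng = random.Random(seed)
    n_replay = int(size * replay_ratio)
    batch = [rng.choice(new_data) for _ in range(size - n_replay)]
    for _ in range(n_replay):
        x = generator()               # sample a synthetic old-task input
        batch.append((x, solver(x)))  # old solver provides its label
    rng.shuffle(batch)
    return batch

# Toy stand-ins: the old "generator" emits inputs near 0.0, which the old
# "solver" labels as class 0; the new task's data is class 1.
old_gen = lambda: 0.0
old_solver = lambda x: 0
new_data = [(1.0, 1)] * 10
batch = replay_batch(new_data, old_gen, old_solver, size=8)
print(sorted(label for _, label in batch))  # -> [0, 0, 0, 0, 1, 1, 1, 1]
```

Training the new solver on such mixed batches is what prevents the old task's decision boundary from being overwritten.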
While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity.
We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by the VGG-net used for ImageNet classification [19]. We find that increasing network depth yields a significant improvement in accuracy; our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure: we learn residuals only and use extremely high learning rates (10^4 times higher than in SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method outperforms existing methods in accuracy, and the visual improvements in our results are easily noticeable.
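The two training tricks named above, residual learning and adjustable gradient clipping, are simple enough to sketch directly. This is an illustrative sketch on toy flat lists, not the paper's network; the helper names are ours.

```python
def residual_target(interp_lr, hr):
    """Residual learning: the network regresses only the difference
    between the upscaled input and the ground truth, which is mostly
    high-frequency detail."""
    return [h - x for h, x in zip(hr, interp_lr)]

def reconstruct(interp_lr, predicted_residual):
    """The final SR output adds the predicted residual back to the input."""
    return [x + r for x, r in zip(interp_lr, predicted_residual)]

def clip_gradients(grads, theta, lr):
    """Adjustable gradient clipping: clip each gradient to
    [-theta/lr, theta/lr], so the effective step size stays bounded
    even with very high learning rates."""
    bound = theta / lr
    return [max(-bound, min(bound, g)) for g in grads]

# Toy integer-intensity patches as flat lists.
interp = [2, 5, 9]      # upscaled low-resolution input
hr     = [3, 4, 10]     # ground-truth high-resolution patch
res    = residual_target(interp, hr)
print(res)                                  # -> [1, -1, 1]
print(reconstruct(interp, res) == hr)       # -> True
print(clip_gradients([5.0, -5.0, 0.05], theta=1.0, lr=10.0))  # -> [0.1, -0.1, 0.05]
```

Note how the clip bound shrinks as the learning rate grows, which is what makes the very high learning rates usable.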
Graph convolutional neural networks (GCNs) have emerged as a key technology in various application domains where the input data is relational. A unique property of GCNs is that their two primary execution stages, aggregation and combination, exhibit drastically different dataflows. Consequently, prior GCN accelerators tackle this research space by casting the aggregation and combination stages as a series of sparse-dense matrix multiplications. However, prior work frequently suffers from inefficient data movements, leaving significant performance on the table. We present GROW, a GCN accelerator that builds on Gustavson's algorithm to architect a row-wise-product-based sparse-dense GEMM accelerator. GROW co-designs software and hardware to strike a balance between locality and parallelism for GCNs, achieving significant energy-efficiency improvements over state-of-the-art GCN accelerators.
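The row-wise (Gustavson) dataflow that GROW builds on is easy to state in software. This is a reference sketch of the algorithm itself, not of the accelerator: each nonzero of a sparse row of A scales one row of the dense matrix B and accumulates into the corresponding output row, so A is streamed row by row and only the touched rows of B are read.

```python
def rowwise_spmm(a_rows, B, n_cols):
    """Row-wise (Gustavson) sparse-dense GEMM. The sparse matrix A is
    given as a list of rows, each a list of (col, val) nonzeros; B is a
    dense matrix (list of rows). Returns C = A @ B as dense rows."""
    C = []
    for nnz in a_rows:
        acc = [0.0] * n_cols          # one output row accumulator
        for k, val in nnz:            # each nonzero A[i,k] ...
            row_b = B[k]              # ... reads exactly one row of B
            for j in range(n_cols):
                acc[j] += val * row_b[j]
        C.append(acc)
    return C

# A = [[2, 0], [0, 3]] in sparse row form; B is dense 2x2.
A = [[(0, 2.0)], [(1, 3.0)]]
B = [[1.0, 2.0], [3.0, 4.0]]
print(rowwise_spmm(A, B, 2))  # -> [[2.0, 4.0], [9.0, 12.0]]
```

The locality argument is visible here: the inner loop touches a single row of B per nonzero, rather than scattering partial products across the whole output as inner- or outer-product dataflows do.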
Training robots that physically interact with humans is challenging: directly involving people in the training process is costly, as it requires a large number of data samples. This paper proposes an alternative approach to this problem. We propose a Human Path Prediction Network (HPPN), a neural network structure that generates a user's future trajectory from sequential robot actions and human responses. We then introduce an evolution-strategy-based robot training method that uses only virtual human movements generated by the HPPN. We demonstrate that our proposed method allows sample-efficient training of a robotic guide for visually impaired people. By collecting only 1.5K episodes from real users, we were able to train the HPPN and generate the 100K virtual episodes required to train the robot. The trained robot precisely guided blindfolded participants along target paths. Furthermore, using virtual episodes, we studied a new reward design that prioritizes human comfort during guidance, at no additional data-collection cost. This sample-efficient training method is expected to be widely applicable to future robots that physically interact with humans.
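A plain evolution-strategies loop, of the kind the training method is based on, can be sketched as follows. The reward function here is a toy stand-in for scoring a policy on HPPN-generated virtual episodes; the parameter, reward shape, and hyperparameters are all invented for illustration.

```python
import random

def evolution_strategy(reward_fn, theta, sigma=0.1, lr=0.05,
                       pop=50, iters=200, seed=0):
    """Basic evolution strategies on a single scalar parameter: perturb
    the parameter with Gaussian noise, score each perturbation with the
    (simulated) reward, and step along the noise-weighted average, an
    estimate of the reward gradient."""
    rng = random.Random(seed)
    for _ in range(iters):
        grad = 0.0
        for _ in range(pop):
            eps = rng.gauss(0.0, 1.0)
            grad += reward_fn(theta + sigma * eps) * eps
        theta += lr * grad / (pop * sigma)
    return theta

# Toy "virtual episode" reward: highest when the policy parameter is 2.0
# (think: an ideal guiding distance); ES should move theta toward it.
reward = lambda t: -(t - 2.0) ** 2
theta = evolution_strategy(reward, theta=0.0)
print(abs(theta - 2.0) < 0.3)  # -> True
```

Because every reward evaluation here is a cheap simulator call rather than a real human trial, the 1.5K-real-to-100K-virtual leverage described above is exactly what such a loop exploits.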
3D-aware image synthesis focuses on preserving spatial consistency in addition to generating high-resolution images with fine details. Recently, the Neural Radiance Field (NeRF) has been introduced for synthesizing novel views with low computational cost and superior performance. While several works investigate generative NeRFs and show remarkable achievements, they cannot handle conditional and continuous feature manipulation in the generation procedure. In this work, we introduce a novel model, called Class-Continuous Conditional Generative NeRF ($\text{C}^{3}$G-NeRF), which can synthesize conditionally manipulated, photorealistic, 3D-consistent images by projecting conditional features into the generator and the discriminator. The proposed $\text{C}^{3}$G-NeRF is evaluated on three image datasets: AFHQ, CelebA, and Cars. Our model shows strong 3D consistency with fine details and smooth interpolation under conditional feature manipulation. For instance, $\text{C}^{3}$G-NeRF exhibits a Fr\'echet Inception Distance (FID) of 7.64 in 3D-aware face image synthesis at a $\text{128}^{2}$ resolution. Additionally, we provide FIDs for the generated 3D-aware images of each class of the datasets, as $\text{C}^{3}$G-NeRF makes it possible to synthesize class-conditional images.